A massively parallel high-performance computing (HPC) system is particularly sensitive to operating system overhead. Traditional general-purpose operating systems are designed to support a wide range of usage models and requirements, and to meet this range of needs they provide a large number of system processes that are often interdependent. The computing overhead of these processes leaves an unpredictable amount of processor time available to a parallel application. A very common parallel programming model, the bulk synchronous parallel model, often employs the Message Passing Interface (MPI) for communication. Synchronization events occur at specific points in the application code; if one processor takes longer to reach such a point than the others, all of them must wait, and the overall finish time increases. Unpredictable operating system overhead is one significant reason a processor might be late to a synchronization point.

Custom lightweight kernel (LWK) operating systems, currently used on some of the fastest computers in the world, help alleviate this problem. The IBM Blue Gene line of supercomputers runs various versions of the CNK operating system, and the Cray XT4 and Cray XT5 supercomputers run Compute Node Linux. Sandia National Laboratories has an almost two-decade commitment to lightweight kernels on its high-end HPC systems. Sandia and University of New Mexico researchers began work on SUNMOS for the Intel Paragon in the early 1990s; this operating system evolved into the Puma, Cougar, and Catamount operating systems deployed on ASCI Red and Red Storm. Sandia continues its work in LWKs with a new R&D effort called Kitten.

The design goals of these operating systems are to:
* Target massively parallel environments composed of thousands of processors with distributed memory and a tightly coupled network.
* Provide the necessary support for scalable, performance-oriented scientific applications.
* Offer a suitable development environment for parallel applications and libraries.
* Emphasize efficiency over functionality.
* Maximize the amount of resources (e.g. CPU, memory, and network bandwidth) allocated to the application.
* Minimize time to completion for the application.

LWK implementations vary, but all strive to provide applications with predictable and maximal access to the CPU and other system resources. To achieve this, they usually rely on simplified algorithms for scheduling and memory management, and system services (e.g. daemons) are limited to the absolute minimum. The services that remain, such as job launch, are constructed in a hierarchical fashion to ensure scalability to thousands of nodes. Networking protocols for communication between nodes are also carefully selected and implemented with scalability in mind; one example is the Portals network programming API.

Lightweight kernel operating systems assume access to a small set of nodes running full-service operating systems, to which they offload services such as login access, compilation environments, batch job submission, and file I/O. By restricting services to only those that are absolutely necessary, and by streamlining those that are provided, the overhead (sometimes called ''noise'') of the lightweight operating system is minimized. This allows a significant ''and'' predictable share of processor cycles to be given to the parallel application.
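The cost of losing that predictability is easiest to see in a minimal sketch of the bulk synchronous pattern described above. The sketch assumes MPI; the iteration count and the <code>compute_step</code> placeholder are hypothetical and not drawn from any particular application. Every step ends at a collective synchronization point, so an operating system interruption on any single node delays every node.

<syntaxhighlight lang="c">
#include <mpi.h>
#include <stdio.h>

/* Hypothetical per-rank computation; in a real code this would be the
 * application's numerical kernel operating on this rank's data. */
static void compute_step(int step) {
    (void)step;
    /* ... local work ... */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int num_steps = 100;            /* arbitrary iteration count */
    for (int step = 0; step < num_steps; step++) {
        double t0 = MPI_Wtime();
        compute_step(step);                /* local computation phase */

        /* Synchronization point: no rank proceeds to the next step
         * until every rank has arrived.  If operating system noise
         * delays one rank, all ranks pay for that delay here. */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("step %d took %.6f s (gated by the slowest rank)\n",
                   step, MPI_Wtime() - t0);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>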
Under a lightweight kernel, the application can make consistent forward progress on each processor, so all processors reach their synchronization points at nearly the same time and the time lost to waiting is reduced.
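One way to observe operating system interference directly is to time a small, fixed unit of work many times in a row and examine how much the measurements vary: on a quiet lightweight kernel the spread is expected to be small, while on a busy general-purpose system occasional samples run noticeably longer. The sketch below is only an illustration of that idea; the unit of work, the sample count, and the use of the POSIX <code>clock_gettime</code> timer are arbitrary choices rather than part of any standard benchmark.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <time.h>

/* A small, fixed unit of work.  Its absolute cost does not matter;
 * only the variation from sample to sample is of interest. */
static volatile double sink;
static void fixed_work(void) {
    double x = 0.0;
    for (int i = 0; i < 100000; i++)
        x += (double)i * 1e-9;
    sink = x;
}

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    const int samples = 1000;              /* arbitrary sample count */
    double min = 1e9, max = 0.0;

    for (int i = 0; i < samples; i++) {
        double t0 = now_sec();
        fixed_work();
        double dt = now_sec() - t0;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
    }

    /* A large gap between min and max suggests interference from
     * other system activity during some of the samples. */
    printf("fixed work: min %.6f s, max %.6f s\n", min, max);
    return 0;
}
</syntaxhighlight>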